This empirical study is mainly devoted to comparing four tree-based boosting algorithms: mart, abc-mart, robust logitboost, and abc-logitboost, for multi-class classification on a variety of publicly available datasets. Some of those datasets have been thoroughly tested in prior studies using a broad range of classification algorithms, including SVM, neural nets, and deep learning.

In terms of the empirical classification errors, our experimental results demonstrate:

1. Abc-mart considerably improves mart.
2. Abc-logitboost considerably improves (robust) logitboost.
3. (Robust) logitboost considerably improves mart on most datasets.
4. Abc-logitboost considerably improves abc-mart on most datasets.
5. These four boosting algorithms (especially abc-logitboost) outperform SVM on many datasets.
6. Compared to the best deep learning methods, these four boosting algorithms (especially abc-logitboost) are competitive.